Sentiment analysis is one of the most basic NLP tasks, used to determine the polarity of text data. There has also been considerable work on multilingual text. Still, hate and offensive speech detection remains a challenge due to the limited availability of data, especially for Indian languages such as Hindi and Marathi. In this work, we consider hate and offensive speech detection in Hindi and Marathi texts. The problem is formulated as a text classification task using state-of-the-art deep learning approaches. We explore different deep learning architectures such as CNN and LSTM, as well as variations of BERT such as multilingual BERT, IndicBERT, and monolingual RoBERTa. The basic CNN- and LSTM-based models are augmented with FastText word embeddings. We compare these algorithms on the HASOC 2021 Hindi and Marathi hate speech datasets. The Marathi dataset consists of binary labels, while the Hindi dataset contains both binary and more fine-grained labels. We show that the transformer-based models perform best, and that even the basic models combined with FastText embeddings achieve competitive performance. Moreover, with normal hyperparameter tuning, the basic models outperform the BERT-based models on the fine-grained Hindi dataset.
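As a rough illustration of the FastText-augmented baseline described above (not the paper's actual code), here is a minimal PyTorch sketch of an LSTM classifier over frozen pretrained embeddings; the vocabulary size, embedding matrix, and label count are hypothetical placeholders:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Baseline text classifier: frozen FastText embeddings -> BiLSTM -> linear head."""
    def __init__(self, embedding_matrix: torch.Tensor, num_classes: int, hidden: int = 128):
        super().__init__()
        # embedding_matrix: (vocab_size, 300) FastText vectors, assumed precomputed
        self.embed = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(embedding_matrix.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)            # (batch, seq_len, 300)
        _, (h, _) = self.lstm(x)             # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.head(h)                  # logits: (batch, num_classes)

# Hypothetical usage, e.g. binary labels as in the Marathi task
vectors = torch.randn(20_000, 300)           # stand-in for real FastText vectors
model = LSTMClassifier(vectors, num_classes=2)
logits = model(torch.randint(0, 20_000, (8, 64)))
```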
Modeling a turbulent combustion system requires modeling both the underlying chemistry and the turbulent flow. Solving the two systems simultaneously is computationally prohibitive. Instead, given the difference in scales at which the two sub-systems evolve, they are typically (re)solved separately. Popular approaches such as the Flamelet Generated Manifolds (FGM) use a two-step strategy: the governing reaction kinetics are pre-computed and mapped to a low-dimensional manifold characterized by a few reaction progress variables (model reduction), and the manifold is then ``looked up'' at runtime by the flow system to estimate the high-dimensional system state. While existing works have focused on these two steps independently, in this work we show that joint learning of the progress variables and the look-up model can yield more accurate results. We build on the base formulation and implementation of ChemTab to include the dynamically generated Thermochemical State Variables (Lower-Dimensional Dynamic Source Terms). We discuss the challenges in implementing this deep neural network architecture and experimentally demonstrate its superior performance.
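To make the joint-learning idea concrete, here is a hedged sketch (illustrative names and dimensions, not the actual ChemTab implementation): a linear reduction producing progress variables and a look-up network trained together under one regression loss, so the reduction is optimized for look-up accuracy:

```python
import torch
import torch.nn as nn

class JointManifoldModel(nn.Module):
    """Sketch: jointly learn progress variables (linear reduction) and the look-up network."""
    def __init__(self, n_species: int, n_progress: int, n_outputs: int):
        super().__init__()
        # Model reduction: species mass fractions -> a few progress variables
        self.reduce = nn.Linear(n_species, n_progress, bias=False)
        # Look-up model: progress variables -> thermochemical source terms
        self.lookup = nn.Sequential(
            nn.Linear(n_progress, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, mass_fractions: torch.Tensor) -> torch.Tensor:
        progress = self.reduce(mass_fractions)   # low-dimensional manifold coordinates
        return self.lookup(progress)             # predicted source terms

# Both components receive gradients from the same loss, unlike the two-step FGM pipeline.
model = JointManifoldModel(n_species=53, n_progress=4, n_outputs=5)
pred = model(torch.rand(32, 53))
```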
Recent developments in quantum computing and machine learning have propelled the interdisciplinary study of quantum machine learning. Sequential modeling is an important task with high scientific and commercial value. Existing VQC- or QNN-based methods require significant computational resources to perform gradient-based optimization of a large number of quantum circuit parameters. The major drawback is that such quantum gradient calculation requires a large number of circuit evaluations, posing challenges for current near-term quantum hardware and simulation software. In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC) that are based on classical RNN, LSTM, and GRU. The main idea of this RC approach is that the QRNN with randomly initialized weights is treated as a dynamical system and only the final classical linear layer is trained. Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models on several function approximation and time series prediction tasks. Since the QRNN training complexity is significantly reduced, the proposed model trains notably faster. In this work we also compare against corresponding classical RNN-based RC implementations and show that the quantum version learns faster, requiring fewer training epochs in most cases. Our results demonstrate a new possibility for utilizing quantum neural networks for sequential modeling with greater quantum hardware efficiency, an important design consideration for noisy intermediate-scale quantum (NISQ) computers.
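The reservoir training pattern itself is easy to illustrate classically. The following sketch (a classical GRU stand-in for the quantum recurrent reservoir, purely to show the training pattern) freezes the randomly initialized recurrent weights and trains only the linear readout:

```python
import torch
import torch.nn as nn

class ReservoirGRU(nn.Module):
    """Classical analogue of the RC idea: a frozen, randomly initialized GRU
    acts as the dynamical system; only the final linear readout is trained."""
    def __init__(self, n_in: int, n_hidden: int, n_out: int):
        super().__init__()
        self.reservoir = nn.GRU(n_in, n_hidden, batch_first=True)
        for p in self.reservoir.parameters():
            p.requires_grad = False          # reservoir weights stay random
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        states, _ = self.reservoir(x)        # (batch, seq_len, n_hidden)
        return self.readout(states[:, -1])   # predict from the final state

model = ReservoirGRU(n_in=1, n_hidden=64, n_out=1)
# Only the readout appears in the optimizer, so training is cheap:
optim = torch.optim.Adam(model.readout.parameters(), lr=1e-3)
```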
Artificial intelligence, and machine learning (ML) in particular, is increasingly developed and deployed to support healthcare in a variety of settings. However, for ML-based clinical decision support (CDS) technologies to be adopted at scale, they need to be portable: a model developed at one institution should be reusable at another. Yet there are many examples of portability failure, particularly due to naive application of ML models. Portability failures can lead to suboptimal care and medical errors, which ultimately could prevent the adoption of ML-based CDS in practice. One specific healthcare challenge that could benefit from enhanced portability is the prediction of 30-day readmission risk. Research to date has shown that deep learning models can effectively model this risk. In this work, we investigate the practicality of model portability through a cross-site evaluation of readmission prediction models. To this end, we apply a recurrent neural network, augmented with self-attention and blended with expert features, to build readmission prediction models for two independent large-scale claims datasets. We further present a novel transfer learning technique that adapts the well-known Born-Again Network (BAN) training approach. Our experiments show that the direct application of an ML model trained at one institution and tested at another is worse than a model trained and tested at the same institution. We further show that our BAN-based transfer learning approach outperforms models trained only on a single institution's data. Notably, this improvement is consistent across both sites and occurs after a single retraining, which illustrates the potential for inexpensive and general model transfer for readmission risk prediction.
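To illustrate the BAN-style retraining pattern (a hedged sketch of the general technique, not the paper's exact method or loss weighting): a teacher trained on the source site supervises a same-architecture student being retrained on the target site's data. `teacher`, `student`, and `target_loader` are assumed to exist:

```python
import torch
import torch.nn.functional as F

def ban_transfer_epoch(teacher, student, target_loader, optimizer, alpha=0.5):
    """One epoch of BAN-style retraining on the target site's data."""
    teacher.eval()
    student.train()
    for features, labels in target_loader:
        with torch.no_grad():
            soft_targets = F.softmax(teacher(features), dim=-1)
        logits = student(features)
        # Combine target-site ground truth with the source-trained teacher's soft labels
        loss = (alpha * F.cross_entropy(logits, labels)
                + (1 - alpha) * F.kl_div(F.log_softmax(logits, dim=-1),
                                         soft_targets, reduction="batchmean"))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```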
Most unsupervised NLP models represent each word with a single point or single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences such as phrases or sentences. We propose a novel embedding method for text sequences (phrases or sentences), where each sequence is represented by a distinct set of multi-mode codebook embeddings that capture the different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers that summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence at test time. Our experiments show that per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we find that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline.
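A minimal sketch of the prediction step (the encoder choice, dimensions, and head layout are illustrative assumptions, not the paper's exact architecture): a sequence encoder emits K codebook vectors, interpretable as cluster centers in the pretrained word-embedding space:

```python
import torch
import torch.nn as nn

class CodebookPredictor(nn.Module):
    """Sketch: map a token sequence to K codebook embeddings (cluster centers)."""
    def __init__(self, vocab_size: int, dim: int = 300, k: int = 10):
        super().__init__()
        self.k, self.dim = k, dim
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.heads = nn.Linear(dim, k * dim)   # one vector per semantic facet

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)
        _, (h, _) = self.encoder(x)
        # K predicted cluster centers in the pretrained word-embedding space
        return self.heads(h[-1]).view(-1, self.k, self.dim)

model = CodebookPredictor(vocab_size=30_000)
centers = model(torch.randint(0, 30_000, (4, 20)))   # shape: (4, 10, 300)
```

Training would match the predicted centers against embeddings of words that co-occur with the sequence, e.g. with a set-matching loss; that part is omitted here.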
Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. MixHop requires no additional memory or computational complexity, and outperforms challenging baselines. In addition, we propose a sparsity regularization that allows us to visualize how the network prioritizes neighborhood information across different graph datasets. Our analysis of the learned architectures reveals that neighborhood mixing varies per dataset.
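The core mechanism can be sketched as follows (a hedged illustration of a MixHop-style layer, not the authors' implementation; a dense adjacency matrix is used for clarity, whereas real implementations use sparse operations): neighborhoods at several powers of the normalized adjacency matrix are transformed separately and concatenated:

```python
import torch
import torch.nn as nn

class MixHopLayer(nn.Module):
    """Sketch of a MixHop-style layer: concatenate sigma(A^j X W_j) over powers j."""
    def __init__(self, in_dim: int, out_dim: int, powers=(0, 1, 2)):
        super().__init__()
        self.powers = powers  # assumed sorted ascending
        self.weights = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in powers)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: symmetrically normalized adjacency matrix (dense, for clarity)
        outs, h, prev_power = [], x, 0
        for j, lin in zip(self.powers, self.weights):
            for _ in range(j - prev_power):
                h = adj @ h                  # advance to the next adjacency power
            prev_power = j
            outs.append(torch.relu(lin(h)))  # sigma(A^j X W_j)
        return torch.cat(outs, dim=-1)       # concatenation mixes distances

layer = MixHopLayer(in_dim=16, out_dim=32)
out = layer(torch.rand(100, 16), torch.eye(100))  # toy graph: (100, 96)
```

Because j = 0 keeps the node's own features while higher powers reach farther neighbors, the concatenation lets the model represent relationships such as difference operators between hops.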